For a class of predicates <math>H</math> defined on a set <math>X</math> and a set of samples <math>x = (x_1, x_2, \dots, x_m)</math>, where <math>x_i \in X</math>, the ''empirical frequency'' of <math>h \in H</math> on <math>x</math> is

:<math>\widehat{Q}_x(h) = \frac{1}{m}\left|\{ i : 1 \le i \le m,\ h(x_i) = 1 \}\right|.</math>

The Uniform Convergence Theorem states, roughly, that if <math>H</math> is "simple" and we draw samples independently (with replacement) from <math>X</math> according to a distribution <math>P</math>, then with high probability every empirical frequency will be close to its expectation, where the expectation is given by <math>Q_P(h) = P\{ y \in X : h(y) = 1 \}</math>. Here "simple" means that the Vapnik–Chervonenkis dimension of the class <math>H</math> is small relative to the size of the sample. In other words, a sufficiently simple collection of functions behaves roughly the same on a small random sample as it does on the distribution as a whole.

==Uniform convergence theorem statement〔(Martin Anthony, Peter L. Bartlett. ''Neural Network Learning: Theoretical Foundations'', pages 46–50. First edition, 1999. Cambridge University Press, ISBN 0-521-57353-X.)〕==

If <math>H</math> is a set of <math>\{0,1\}</math>-valued functions defined on a set <math>X</math> and <math>P</math> is a probability distribution on <math>X</math>, then for <math>\varepsilon > 0</math> and <math>m</math> a positive integer, we have:

:<math>P^m\{ |Q_P(h) - \widehat{Q}_x(h)| \ge \varepsilon \text{ for some } h \in H \} \le 4 \Pi_H(2m) e^{-\varepsilon^2 m / 8},</math>

where, for any <math>x \in X^m</math>,

:<math>Q_P(h) = P\{ y \in X : h(y) = 1 \},</math>
:<math>\widehat{Q}_x(h) = \frac{1}{m}\left|\{ i : 1 \le i \le m,\ h(x_i) = 1 \}\right|,</math>

and <math>|x| = m</math>. <math>P^m</math> indicates that the probability is taken over <math>x</math> consisting of <math>m</math> i.i.d. draws from the distribution <math>P</math>.

<math>\Pi_H</math> is defined as follows: for any <math>\{0,1\}</math>-valued functions <math>H</math> over <math>X</math> and any <math>D \subseteq X</math>,

:<math>\Pi_H(D) = \{ h \cap D : h \in H \},</math>

where each <math>h</math> is identified with the subset <math>\{ y \in X : h(y) = 1 \}</math> of <math>X</math> on which it outputs 1. And for any natural number <math>m</math>, the ''shattering number'' <math>\Pi_H(m)</math> is defined as:

:<math>\Pi_H(m) = \max\{ |\Pi_H(D)| : |D| = m,\ D \subseteq X \}.</math>

From the point of view of learning theory one can consider <math>H</math> to be the concept/hypothesis class defined over the instance set <math>X</math>. Before getting into the details of the proof of the theorem we will state Sauer's Lemma, which we will need in our proof.
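As a concrete illustration of the quantities above (not part of the original text), the following sketch takes <math>H</math> to be the class of threshold predicates <math>h_t(y) = \mathbf{1}[y \le t]</math> on <math>X = [0,1]</math> with <math>P</math> the uniform distribution, so that <math>Q_P(h_t) = t</math>. It computes the empirical frequency <math>\widehat{Q}_x(h)</math> on a random sample and the largest deviation <math>|Q_P(h) - \widehat{Q}_x(h)|</math> over a grid of hypotheses; since this class has VC dimension 1, the theorem predicts the deviation shrinks uniformly as the sample grows. All function names here are illustrative, not from the source.

```python
import random

def empirical_frequency(h, sample):
    """Q^_x(h): fraction of sample points on which the predicate h outputs 1."""
    return sum(1 for y in sample if h(y)) / len(sample)

def true_frequency_threshold(t):
    """Q_P(h_t) for h_t(y) = 1[y <= t] under P = Uniform[0,1]: P{y : y <= t} = t."""
    return t

def max_deviation(sample, thresholds):
    """Largest |Q_P(h) - Q^_x(h)| over the (finite grid of) hypotheses h_t."""
    return max(
        abs(true_frequency_threshold(t)
            - empirical_frequency(lambda y, t=t: y <= t, sample))
        for t in thresholds
    )

if __name__ == "__main__":
    random.seed(0)
    thresholds = [i / 100 for i in range(101)]  # grid of hypotheses h_t
    for m in (100, 10_000):
        sample = [random.random() for _ in range(m)]
        # Deviation is uniform over the class: one number bounds every h in H.
        print(m, round(max_deviation(sample, thresholds), 3))
```

Running this shows the worst-case deviation over the whole class dropping as <math>m</math> increases, which is exactly the uniform (over <math>h \in H</math>) convergence the theorem quantifies.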